Artificial intelligence (AI) has revolutionized many industries, but its use in health insurance claims processing is stirring sharp debate. Health insurance giants like Cigna, Humana, and UnitedHealth Group are facing serious allegations over their use of algorithm-driven systems to deny claims at a staggering rate. With lives at stake and lawsuits piling up, the practice has raised deep ethical and legal concerns.
The Allegations Against Industry Leaders
The controversy largely centers on accusations that AI algorithms are being used to wrongfully deny claims, often in mere seconds. One lawsuit alleges that Cigna rejected more than 300,000 claims in just two months, with its algorithmic system spending an average of only 1.2 seconds reviewing each request. Critics argue such speed makes meaningful human review impossible.
Similar allegations confront UnitedHealth Group, which is accused of pressuring employees to use its AI tool, nH Predict, to cut short patient stays in rehabilitation facilities. According to lawsuits, the system has a 90% error rate, with the vast majority of its denials reversed on appeal. Yet few patients actually appeal, deterred by the complexity of the process, leaving many to face exorbitant medical bills.
Even Humana has come under fire for using the same predictive algorithm to cut short payments for rehabilitative care, a move that critics say prioritizes cost savings over patient welfare. The insurers deny these accusations: both UnitedHealth and Cigna argue their tools promote efficiency and adherence to medical guidelines, though experts and patients question whether ethics are being sidelined for profit.
Stories of Human Resilience Amid Denials
Behind this debate are real people facing heart-wrenching struggles. Deirdre O’Reilly, an intensive care physician, understands the crisis on both a professional and personal level. When her son suffered a severe allergic reaction, BlueCross BlueShield of Vermont denied the claim for his emergency room visit, leaving her with a $5,000 bill. She appealed four times, only to be met with conflicting explanations.
“My son didn’t have a choice,” O’Reilly said. “He was going to die if he didn’t go to the ER.” She’s seen similar patterns play out with her patients, including premature infants denied critical oxygen equipment.
For those without the medical knowledge or resources O’Reilly possesses, the process of appealing AI-driven decisions can be overwhelming. Many patients have no choice but to give up or pay out of pocket, worsening inequities in the system. According to surveys, few patients understand their right to appeal, with less than 0.2% doing so each year.
Nearly half of U.S. adults report having received unexpected medical bills, with many saying delays in care caused by insurance denials led to worsened health outcomes. The emotional toll on families is immense, with stories like O’Reilly’s only scratching the surface of a systemic problem.
Industry Responses and Ethical Dilemmas
Faced with these allegations, health insurers defend their practices, emphasizing the role of medical professionals in the final decision-making process. Cigna, for example, insists that its claim review system, PxDx, is not powered by AI but instead uses long-standing sorting technology to match claims with diagnostic codes. Similarly, UnitedHealth states its algorithms serve as tools to assist its clinical teams, rather than replace them.
Yet to critics like Glenn Danas, a partner at Clarkson Law Firm, these reassurances ring hollow. Danas, who represents many of the plaintiffs suing these companies, argues that automation itself isn’t the problem; the problem is how it’s used to maximize profits at the expense of patients.
Expert opinions are mixed. While some warn against removing human oversight from decision-making, others see constructive possibilities for AI. Mika Hamer, a professor of health policy and management at the University of Maryland, believes algorithms can improve healthcare systems if ethical safeguards exist. She notes, however, that the root issues, such as high healthcare costs and opaque billing systems, remain unresolved.
Lawmakers are stepping in as well. States like California have banned automation from finalizing coverage decisions, requiring physician oversight to protect patients. Meanwhile, federal regulators are considering new rules to ensure greater accountability in claims processing.
Responsible Use of AI in Healthcare
With the growing presence of AI in healthcare, finding a balance between innovation and ethics is critical. Some experts believe AI can be a force for good, streamlining complex paperwork and reducing delays in approvals. For example, generative AI tools are being developed to help patients draft appeal letters, leveling the playing field against traditional bureaucratic hurdles.
Still, the fear remains that AI, if unchecked, could turn a patient-centered system into one driven solely by efficiency metrics. Whether the technology evolves responsibly hinges on oversight and transparency: consumers must understand how their claims are processed and have an accessible way to appeal denials.
AI is clearly here to stay in healthcare, but its use demands careful scrutiny. Insurers, policymakers, and healthcare advocates must collaborate to create systems that merge cost efficiency with ethical integrity. Only then can AI truly serve its intended purpose—to enhance, not hinder, patient care.
The ongoing lawsuits against industry leaders like Cigna, Humana, and UnitedHealth Group could serve as a turning point for how automation is integrated into healthcare. For now, the message is clear: human lives are more than just data points, and no algorithm should determine their worth. The future of AI in healthcare now rests on its ability to coexist with empathy, fairness, and accountability.